
    Interactive Robot Learning of Gestures, Language and Affordances

    A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose goal is to enable effective teamwork between humans and robots. However, in many situations human teams remain superior to human-robot teams, primarily because human teams can easily agree on a common goal through language, and the individual members observe each other effectively, leveraging their shared motor repertoire and sensorimotor resources. This paper shows that for cognitive robots it is possible, and indeed fruitful, to combine knowledge acquired from interacting with elements of the environment (affordance exploration) with the probabilistic observation of another agent's actions. We propose a model that unites (i) learning robot affordances and word descriptions with (ii) statistical recognition of human gestures with vision sensors. We discuss theoretical motivations and possible implementations, and we show initial results which highlight that, after having acquired knowledge of its surrounding environment, a humanoid robot can generalize this knowledge to the case in which it observes another agent (a human partner) performing the same motor actions previously executed during training. Comment: code available at https://github.com/gsaponaro/glu-gesture
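
    A minimal sketch of the kind of probabilistic fusion such a model performs, combining a gesture recognizer's likelihoods with affordance knowledge learned through exploration; the action set and all probability values below are illustrative assumptions, not taken from the paper or its released code.

    # Minimal sketch (illustrative, not the authors' released code): fusing a
    # gesture classifier's likelihoods with affordance knowledge to infer the
    # action a human partner is performing.
    import numpy as np

    actions = ["grasp", "tap", "touch"]

    # P(effect | action) learned during the robot's own exploration
    # (hypothetical values for a spherical object that rolls when tapped).
    p_effect_given_action = np.array([0.1, 0.8, 0.05])  # P(object moved | action)

    # P(gesture observation | action) from a vision-based gesture recognizer.
    p_obs_given_action = np.array([0.3, 0.5, 0.2])

    # Observed: the object moved. Combine both cues under a uniform action prior.
    posterior = p_effect_given_action * p_obs_given_action
    posterior /= posterior.sum()

    for a, p in zip(actions, posterior):
        print(f"P({a} | gesture, effect) = {p:.2f}")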

    Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

    One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects, and what their effects will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is unfeasible, learning from experience is required, posing the challenge of collecting a large amount of experiences (i.e., training data). Typically, a manipulative robot operates on external objects using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable. Nevertheless, it is reasonable to assume that, while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which sensorimotor skills that a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making and tool-selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset. Comment: dataset available at https://vislab.isr.tecnico.ulisboa.pt/, IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob 2017)
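
    A rough sketch of the generalization idea under stated assumptions: if effectors (hand postures or tools) are described by shared visual features, affordance knowledge gathered with the bare hands can be queried for an unseen tool by feature similarity. The feature choices, kernel width, and all numbers are hypothetical, not the paper's model.

    # Illustrative sketch (not the paper's model): representing effectors by
    # visual shape features so that affordance knowledge learned with bare-hand
    # postures can be queried for a previously unseen tool.
    import numpy as np

    # Hypothetical feature vectors: [elongation, convexity] of the effector.
    hand_postures = {
        "open_hand": np.array([0.2, 0.9]),
        "flat_hand": np.array([0.6, 0.7]),
        "fist":      np.array([0.1, 0.95]),
    }
    # Effect magnitude (e.g., object displacement) observed for each posture.
    observed_effects = {"open_hand": 0.05, "flat_hand": 0.30, "fist": 0.10}

    def predict_effect(tool_features):
        """Predict an unseen tool's effect by similarity-weighted averaging
        over the hand postures the robot has already explored."""
        weights, effects = [], []
        for name, feats in hand_postures.items():
            w = np.exp(-np.linalg.norm(feats - tool_features) ** 2 / 0.1)
            weights.append(w)
            effects.append(observed_effects[name])
        return np.average(effects, weights=weights)

    rake = np.array([0.7, 0.6])  # elongated, moderately convex tool
    print(f"Predicted displacement with rake: {predict_effect(rake):.2f}")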

    Accuracy of ChatGPT-Generated Information on Head and Neck and Oromaxillofacial Surgery: A Multicenter Collaborative Analysis

    Objective: To investigate the accuracy of Chat-Based Generative Pre-trained Transformer (ChatGPT) in answering questions and solving clinical scenarios of head and neck surgery. Study design: Observational and evaluative study. Setting: Eighteen surgeons from 14 Italian head and neck surgery units. Methods: A total of 144 clinical questions encompassing different subspecialties of head and neck surgery and 15 comprehensive clinical scenarios were developed. Questions and scenarios were input into ChatGPT-4, and the resulting answers were evaluated by the researchers using Likert scales for accuracy (range 1-6), completeness (range 1-3), and quality of references. Results: The overall median score of open-ended questions was 6 (interquartile range [IQR]: 5-6) for accuracy and 3 (IQR: 2-3) for completeness. Overall, the reviewers rated the answer as entirely or nearly entirely correct in 87.2% of cases and as comprehensive, covering all aspects of the question, in 73% of cases. The artificial intelligence (AI) model achieved a correct response in 84.7% of the closed-ended questions (11 wrong answers). As for the clinical scenarios, ChatGPT provided a fully or nearly fully correct diagnosis in 81.7% of cases. The proposed diagnostic or therapeutic procedure was judged to be complete in 56.7% of cases. The overall quality of the bibliographic references was poor, and sources were nonexistent in 46.4% of cases. Conclusion: The results generally demonstrate a good level of accuracy in the AI's answers. The AI's ability to resolve complex clinical scenarios is promising, but it still falls short of being a reliable support for the decision-making process of specialists in head and neck surgery.
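
    For readers unfamiliar with the reported statistics, a brief sketch of how such Likert ratings reduce to a median and interquartile range; the scores below are placeholders, not the study's data.

    # Sketch of the aggregation reported above (median and IQR of Likert
    # ratings); the scores are hypothetical placeholders, not study data.
    import numpy as np

    accuracy_scores = np.array([6, 5, 6, 6, 4, 5, 6, 3, 6, 5])  # 1-6 scale

    median = np.median(accuracy_scores)
    q1, q3 = np.percentile(accuracy_scores, [25, 75])
    print(f"Accuracy: median {median:.0f} (IQR: {q1:.0f}-{q3:.0f})")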

    Diagnostic Accuracy of Obstructive Airway Adult Test for Diagnosis of Obstructive Sleep Apnea

    Rationale. The gold standard for the diagnosis of Obstructive Sleep Apnea (OSA) is polysomnography, whose accessibility is however reduced by costs and limited availability, so that additional diagnostic tests are needed. Objectives. To analyze the diagnostic accuracy of the Obstructive Airway Adult Test (OAAT) compared to polysomnography for the diagnosis of OSA in adult patients. Methods. Ninety patients with OSA verified by polysomnography (AHI ≥ 5) and ten healthy subjects, randomly selected, were included; all were interviewed by one blinded examiner using the OAAT questions. Measurements and Main Results. The Spearman rho, evaluated to measure the correlation between the OAAT and polysomnography, was 0.72 (p < 0.01). The area under the ROC curve (95% CI) was the parameter used to evaluate the accuracy of the OAAT: it was 0.91 (0.81-1.00) for the diagnosis of OSA (AHI ≥ 5), 0.90 (0.82-0.98) for moderate OSA (AHI ≥ 15), and 0.84 (0.76-0.92) for severe OSA (AHI ≥ 30). Conclusions. The OAAT has shown a high correlation with polysomnography and a high diagnostic accuracy for the diagnosis of OSA. It has also been shown to discriminate among the different degrees of severity of OSA. Additional large studies aiming to validate this questionnaire as a screening or diagnostic test are needed.
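
    A small sketch of the two accuracy measures used here, Spearman correlation and ROC AUC, computed with standard libraries; the questionnaire scores and AHI values are illustrative placeholders, not study data.

    # Sketch of the study's two accuracy measures: Spearman correlation
    # between questionnaire and polysomnography scores, and ROC AUC for an
    # OSA cut-off. Arrays are hypothetical, not the study's measurements.
    from scipy.stats import spearmanr
    from sklearn.metrics import roc_auc_score

    oaat = [2, 5, 3, 7, 8, 1, 6, 4]        # questionnaire scores
    ahi  = [4, 22, 9, 35, 41, 2, 28, 14]   # apnea-hypopnea index (PSG)

    rho, p = spearmanr(oaat, ahi)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

    osa = [1 if a >= 5 else 0 for a in ahi]  # OSA diagnosis at AHI >= 5
    print(f"AUC for AHI >= 5: {roc_auc_score(osa, oaat):.2f}")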

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low-middle-income countries. Results In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of 'single-use' consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. In phase 4, the top three shortlisted interventions for low-middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion This is a step toward environmentally sustainable operating environments with actionable interventions applicable to both high- and low-middle-income countries.
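
    As a rough illustration of the phase 3 step, a sketch of shortlisting interventions by a patient-acceptability threshold and ranking the remainder; the two sub-threshold figures echo the abstract, while the other values are assumed for illustration.

    # Sketch of a phase-3 style co-prioritization step: keep interventions
    # whose patient acceptability clears the 90 per cent threshold, then rank.
    # The first two acceptability values are assumed; the last two are the
    # figures reported in the abstract.
    acceptability = {
        "introduce recycling": 0.95,
        "reduce anaesthetic gases": 0.93,
        "reduce general anaesthesia": 0.84,
        "re-sterilize 'single-use' consumables": 0.86,
    }

    shortlist = {k: v for k, v in acceptability.items() if v >= 0.90}
    ranked = sorted(shortlist, key=shortlist.get, reverse=True)
    print(ranked)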

    Bolshevik. 1951. No. 011

    We propose a developmental approach that allows a robot to interpret and describe the actions of human agents by reusing previous experience. The robot first learns the association between words and object affordances by manipulating the objects in its environment. It then uses this information to learn a mapping between its own actions and those performed by a human in a shared environment. It finally fuses the information from these two models to interpret and describe human actions in light of its own experience. In our experiments, we show that the model can be used flexibly to do inference on different aspects of the scene. We can predict the effects of an action on the basis of object properties. We can revise the belief that a certain action occurred, given the observed effects of the human action. In an early action recognition fashion, we can anticipate the effects when the action has only been partially observed. By estimating the probability of words given the evidence and feeding them into a pre-defined grammar, we can generate relevant descriptions of the scene. We believe that this is a step towards providing robots with the fundamental skills to engage in social collaboration with humans
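
    A toy sketch of the bidirectional inference described above, under assumed numbers: the same small joint model predicts effects before an action completes and revises the belief over actions once an effect is observed. Variables and probabilities are illustrative, not the authors' implementation.

    # Illustrative sketch (assumed structure, not the authors' implementation):
    # a tiny joint distribution over action and effect supports both effect
    # prediction and belief revision, as described above.
    import numpy as np

    actions, effects = ["grasp", "tap"], ["still", "rolls"]

    # P(effect | action) for a small spherical object (hypothetical numbers).
    p_effect = np.array([[0.9, 0.1],    # grasp -> mostly no motion
                         [0.2, 0.8]])   # tap   -> mostly rolls

    prior = np.array([0.5, 0.5])        # belief over the human's action

    # Prediction: marginal effect distribution before the action is seen.
    print("P(effect) =", prior @ p_effect)

    # Revision: the object rolled; update the belief over the action.
    posterior = prior * p_effect[:, effects.index("rolls")]
    posterior /= posterior.sum()
    print("P(action | rolls) =", dict(zip(actions, np.round(posterior, 2))))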

    Peritumoral vascular invasion and NHERF1 expression define an immunophenotype of grade 2 invasive breast cancer associated with poor prognosis

    Background Traditional determinants proven to be of prognostic importance in breast cancer include TNM staging, histological grade, proliferative activity, hormone receptor status and HER2 overexpression. One of the limitations of the histological grading scheme is that a high percentage of breast cancers are still classified as grade 2, a category with ambiguous clinical significance. The aim of this study was to better characterize tumors scored as grade 2. Methods We investigated traditional prognostic factors and a panel of tumor markers not used in routine diagnosis, such as NHERF1, VEGFR1, HIF-1α and TWIST1, in 187 primary invasive breast cancers by immunohistochemistry, stratifying patients into good and poor prognostic groups by the Nottingham Prognostic Index. Results Grade 2 subgroup analysis showed that peritumoral vascular invasion (PVI) (p = 0.023) and the loss of membranous NHERF1 (p = 0.028) were adverse prognostic factors. Relevantly, 72% of grade 2 tumors were associated with the PVI+/membranous NHERF1- expression phenotype, which characterized an adverse prognosis (p = 0.000). Multivariate logistic regression analysis in the whole series revealed that poor prognosis correlated with PVI and MIB1 (p = 0.000 and p = 0.001, respectively). Furthermore, in the whole series of breast cancers we found cytoplasmic NHERF1 expression positively correlated with VEGFR1 (r = 0.382, p = 0.000), and in VEGFR1-overexpressing tumors the oncogenic receptor co-localized with NHERF1 at the cytoplasmic level. Conclusions The PVI+/membranous NHERF1- expression phenotype identifies a category of grade 2 tumors with the worst prognosis, including a patient subgroup with a family history of breast cancer. These observations support the use of the PVI+/membranous NHERF1- immunophenotype as a marker that could improve the accuracy of predicting clinical outcome in grade 2 tumors.
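
    A brief sketch of a multivariate logistic regression of this kind (binary markers predicting poor prognosis), with odds ratios read off the fitted coefficients; the eight-patient matrix is hypothetical, not the study cohort.

    # Sketch of a multivariate logistic regression like the one reported above
    # (PVI and MIB1 as predictors of poor prognosis); the data below are
    # hypothetical, not the study cohort.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: [PVI present, MIB1 high]; outcome: 1 = poor prognosis.
    X = np.array([[1, 1], [1, 0], [0, 1], [0, 0],
                  [1, 1], [0, 0], [1, 0], [0, 1]])
    y = np.array([1, 1, 1, 0, 1, 0, 0, 0])

    model = LogisticRegression().fit(X, y)
    print("Odds ratios:", np.round(np.exp(model.coef_[0]), 2))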